UT Austin in the TREC 2012 Crowdsourcing Track’s Image Relevance Assessment Task

Authors

  • Hyun Joon Jung
  • Matthew Lease
Abstract

We describe our submission to the Image Relevance Assessment Task (IRAT) at the 2012 Text REtrieval Conference (TREC) Crowdsourcing Track. Four aspects distinguish our approach: 1) an interface for cohesive, efficient topic-based relevance judging and reporting of judgment confidence; 2) a variant of Welinder and Perona’s method for online crowdsourcing [17] (inferring the quality of judgments and judges during data collection in order to dynamically optimize data collection); 3) a completely unsupervised approach using no labeled data for either training or tuning; and 4) automatic generation of individualized error reports for each crowd worker, supporting transparent assessment and education of workers. Our system was built start-to-finish in two weeks, and we collected approximately 44,000 labels for about $40 US.
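To make the online aggregation idea concrete, the sketch below shows one simplified way such a loop can work. It is an illustrative assumption in the spirit of Welinder and Perona's method [17], not the authors' actual system: every name, threshold, and the naive-Bayes label model here is hypothetical. The sketch keeps a Beta posterior over each worker's accuracy, folds each incoming binary label into a per-image relevance posterior, and keeps requesting labels for an image only while that posterior remains uncertain.

from collections import defaultdict

# Illustrative sketch only: a simplified online label-aggregation loop
# loosely in the spirit of Welinder and Perona [17]. All names, priors,
# and thresholds are assumptions, not the authors' system.

CONF = 0.95   # stop labeling an image once P(relevant) leaves (0.05, 0.95)
PRIOR = 0.5   # uninformative prior probability that an image is relevant


class Worker:
    """Beta(correct + 2, incorrect + 1) posterior over a worker's accuracy.

    The optimistic prior (mean 2/3, i.e. better than chance) breaks the
    cold start where all-0.5 accuracies would leave posteriors frozen.
    """

    def __init__(self):
        self.correct = 0.0
        self.incorrect = 0.0

    @property
    def accuracy(self):
        return (self.correct + 2.0) / (self.correct + self.incorrect + 3.0)


workers = defaultdict(Worker)            # worker_id -> Worker
posterior = defaultdict(lambda: PRIOR)   # image_id  -> P(relevant)


def process_label(worker_id, image_id, label):
    """Fold one binary label (1 = relevant) into the image's posterior,
    then soft-credit the worker against the updated consensus.
    Returns True if the image still needs more labels."""
    acc = workers[worker_id].accuracy
    p = posterior[image_id]
    # Naive-Bayes update: the worker reports the true label w.p. acc.
    if label == 1:
        num = p * acc
        den = num + (1.0 - p) * (1.0 - acc)
    else:
        num = p * (1.0 - acc)
        den = num + (1.0 - p) * acc
    p = num / den
    posterior[image_id] = p
    # Probability that this worker agreed with the (updated) consensus.
    agree = p if label == 1 else 1.0 - p
    workers[worker_id].correct += agree
    workers[worker_id].incorrect += 1.0 - agree
    return 1.0 - CONF < p < CONF   # True -> keep routing labels here


# Toy usage: three workers label the hypothetical image "img7".
for wid, lab in [("w1", 1), ("w2", 1), ("w3", 0), ("w1", 1)]:
    more = process_label(wid, "img7", lab)
    print(wid, round(posterior["img7"], 3), "need more:", more)

One simplification worth flagging: crediting a worker against a consensus that already includes their own label biases accuracy estimates upward; a real system would hold the worker's label out of the consensus before scoring them, and [17] uses a fuller generative model in place of this naive-Bayes shortcut.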

Similar articles

Overview of the TREC 2012 Crowdsourcing Track

In 2012, the Crowdsourcing track had two separate tasks: a text relevance assessing task (TRAT) and an image relevance assessing task (IRAT). This track overview describes the track and provides analysis of the track’s results.

Overview of the TREC 2013 Crowdsourcing Track

In 2013, the Crowdsourcing track partnered with the TREC Web Track and had a single task to crowdsource relevance judgments for a set of Web pages and search topics shared by the Web Track. This track overview describes the track and provides analysis of the track’s results.

Using Hybrid Methods for Relevance Assessment in TREC Crowd '12

The University of Iowa (UIowaS) submitted three runs to the TRAT subtask of the 2012 TREC Crowdsourcing track. The task objective was to evaluate approaches to crowdsourcing high quality relevance judgments for a text document collection. We used this as an opportunity to examine three hybrid (combination of human-based and machine-based) approaches while simultaneously limiting time and cost. ...

Northeastern University Runs at the TREC12 Crowdsourcing Track

The goal of the TREC 2012 Crowdsourcing Track was to evaluate approaches to crowdsourcing high quality relevance judgments for images and text documents. This paper describes our submission to the Text Relevance Assessing Task. We explored three different approaches for obtaining relevance judgments. Our first two approaches are based on collecting a limited number of preference judgments from ...

Crowdsourcing Blog Track Top News Judgments at TREC

Since its inception, the venerable TREC retrieval conference has relied upon specialist assessors or participating groups to create relevance judgments for the tracks that it runs. However, recently crowdsourcing has been proposed as a possible alternative to traditional TREC-like assessments, supporting fast accumulation of judgments at a low cost. 2010 was the first year that TREC experimente...

Publication date: 2013